Patent Abstract:
A system and method for performing interleaved packet processing in a network router. A packet to be routed includes a source address bit pattern and a destination address bit pattern that are each processed by a task processor in accordance with a data tree. The data tree includes multiple nodes linked by branches, wherein an instruction that is associated with each node within the data tree is utilized for determining which branch is to be taken in accordance with the source address bit pattern or the destination address bit pattern. A first bank of registers is utilized to load an instruction to be executed by the task processor at each node of the data tree in accordance with the source address bit pattern. A second bank of registers is utilized for loading an instruction to be executed by the task processor at each node of the data tree in accordance with the destination address bit pattern. A task scheduler enables the first bank of registers to transfer an instruction loaded therein for processing by the task processor only during even time cycles, and enables the second bank of registers to transfer an instruction loaded therein for processing by the task processor only during odd time cycles.
Publication No.: US20010007559A1
Application No.: US09/753,921
Filing Date: 2001-01-03
Publication Date: 2001-07-12
Inventors: Jean Le Pennec; Claude Pin; Alain Benayoun; Patrick Michel
Applicant: International Business Machines Corp.
IPC Main Class: H04L45-60
Patent Description:
[0001] 1. Technical Field
[0002] The present invention relates generally to systems for processing routing and filtering information for each packet received at a router node of a data transmission network, and in particular to an interleaved processing system based upon a tree lookup structure in a network router.
[0003] 2. Description of the Related Art
[0004] Today, data communication systems are based upon data transmission networks wherein routers are used to link remote sites. Routing is a major bottleneck in such systems, primarily due to the overhead processing time and the additional memory capacity required for routing. One of the primary routing functions entails determining a particular routing path for a packet or frame across the network using specific protocols. This path determination is based on a variety of metrics such as the delay introduced by the network or the link cost. In addition, this determination takes into account other rules, generically called filtering, such as communication restrictions or priority criteria.
[0005] Another important routing function is packet forwarding, which entails processing of inbound data packets and the subsequent forwarding of these data packets to the appropriate outbound destination in accordance with a destination address within the packet header.
[0006] Both the determination of the routing path and packet forwarding based upon the destination address field in the packet header are performed within the same routing device. Nevertheless, new techniques tend to exploit the difference between these functions, thus separating the corresponding operations. For example, a single routing path processing unit could support several packet forwarding units.
[0007] Since the routing processing time is relatively high and varies from one routing computation to another, it is difficult to support multiple time-sensitive applications such as multimedia. For both the filtering and packet forwarding processing, memory searches consume considerable time. Within the routing processing context, “searching” entails retrieving routing information encoded within a predetermined bit pattern of a packet address header. In particular, the destination of a data packet typically corresponds to such a bit pattern within the destination address field of the packet header. The required search involves comparing a portion of the destination address bit pattern to predetermined bit sequences, or “keys,” that identify appropriate routing information. Efforts have been made to optimize the speed of such comparison-based searches (often referred to as prefix matching searches) by using parallel processing, but this approach has its own limitations.
[0008] In addition to packet forwarding, a typical packet routing cycle includes a filtering process that is performed with respect to a source address field in the packet header. Today, the routing process and the filtering process are either performed sequentially utilizing a single processing entity, or are performed simultaneously utilizing two separate processing units. For each process, there are two phases which are repeated until the end of the process. These phases are typically referred to as an instruction loading phase and an instruction processing phase. It should be noted that, in classical systems, the loading and processing phases are very close in duration.
[0009] The routing function for determining an outgoing destination node to which a packet will be forwarded typically utilizes a longest prefix matching (LPM) algorithm that is generally implemented in a tree structure. The filtering process may also utilize this same type of process and algorithm, but in a separate tree structure. The trees utilized for routing and filtering are implemented in a memory structure containing an instruction at each tree node, wherein each instruction provides a link to sub-nodes. The tree structure is thus traversed starting from a tree root down to a leaf node that contains an instruction which provides either the routing information or the filtering rules. This last instruction is very often provided in an indirect manner, such as by an address value that corresponds to the field containing the routing or filtering information for each leaf node. For further information regarding the nature of LPM searching, reference is made to the article “Routing on longest-matching prefixes”, IEEE/ACM Transactions on Networking, vol. 4, no. 1, February 1996, pages 86-97, which is incorporated herein by reference.
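The LPM traversal outlined above can be sketched as a walk down a binary trie that remembers the deepest node carrying routing information. This is an illustrative model only; the node classes, function names, and toy routes (RT1, RT2) are invented for the sketch and are not the patent's node instruction format:

```python
# Minimal binary-trie longest-prefix-match sketch (illustrative names only).
class TrieNode:
    def __init__(self, route=None):
        self.route = route      # routing info if this node terminates a stored prefix
        self.children = {}      # maps '0'/'1' to a child TrieNode

def insert(root, prefix, route):
    """Store a routing entry under the given bit-string prefix."""
    node = root
    for bit in prefix:
        node = node.children.setdefault(bit, TrieNode())
    node.route = route

def longest_prefix_match(root, address_bits):
    """Follow one bit per step; return the route of the longest matching prefix."""
    node, best = root, root.route
    for bit in address_bits:
        node = node.children.get(bit)
        if node is None:
            break               # no branch for this bit: stop descending
        if node.route is not None:
            best = node.route   # remember the deepest prefix seen so far
    return best

root = TrieNode(route="default")
insert(root, "10", "RT1")
insert(root, "1011", "RT2")
```

For example, an address beginning `1011…` matches both stored prefixes, and the longer one (`1011`, RT2) wins.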
[0010] Systems currently available for processing the routing function or the filtering function using a tree structure require considerable memory overhead for storing the information in each node, and also consume considerable processing overhead for the requisite large number of memory accesses. From the foregoing, it can be appreciated that a need exists for an optimized processing system and method for performing both routing and filtering functions within a router.
SUMMARY OF THE INVENTION
[0011] A system and method for performing interleaved packet processing in a network router are disclosed herein. A packet to be routed includes a source address bit pattern and a destination address bit pattern that are each processed by a task processor in accordance with a data tree. The data tree includes multiple nodes linked by branches, wherein an instruction that is associated with each node within the data tree is utilized for determining which branch is to be taken in accordance with the source address bit pattern or the destination address bit pattern. A first bank of registers is utilized to load an instruction to be executed by the task processor at each node of the data tree in accordance with the source address bit pattern. A second bank of registers is utilized for loading an instruction to be executed by the task processor at each node of the data tree in accordance with the destination address bit pattern. A task scheduler enables the first bank of registers to transfer an instruction loaded therein for processing by the task processor only during even time cycles, and enables the second bank of registers to transfer an instruction loaded therein for processing by the task processor only during odd time cycles.
[0012] All objects, features, and advantages of the present invention will become apparent in the following detailed written description.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
[0014] FIG. 1 illustrates a tree structure utilized with the interleaved packet processing performed in accordance with a preferred embodiment of the present invention;
[0015] FIG. 2 is a block diagram depicting an interleaved packet processing system in accordance with a preferred embodiment of the present invention;
[0016] FIG. 3 is a timing diagram illustrating interleaved tasks performed by a task processor in accordance with a preferred embodiment of the present invention;
[0017] FIG. 4 is a detailed block diagram depicting an interleaved packet processing system in accordance with a preferred embodiment of the present invention; and
[0018] FIG. 5 illustrates a representative format of a lookup instruction utilized in the interleaved packet processing system of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0019] With reference now to the figures, wherein like reference numerals refer to like and corresponding parts throughout, and in particular with reference to FIG. 1, there is illustrated a tree structure utilized with the interleaved packet processing performed in accordance with a preferred embodiment of the present invention. As depicted in FIG. 1, the top level A of a tree 100 is commonly referred to as the “root” node. Intermediate nodes, referred to alternately as “branches”, are represented by letters B to K. End nodes (“leaves”) each represent a corresponding instruction for a packet, such as the route RT0 to RT5 that the packet is to follow. For the case in which tree 100 is a routing tree for performing destination address searching, the result of a leaf instruction may be to accept a routing instruction for the packet. If tree 100 is a filter check tree, the result of the tree instruction search may be to determine relevant network routing parameters such as the priority of the packet. Additional filtering decisions may be performed utilizing a higher level protocol type.
[0020] The left part of tree 100, starting from A and proceeding to node B, etc., represents a classical binary tree search in which one bit position per unit time is analyzed. As illustrated on the left part of tree 100, the left branch is selected and traversed when the result of the instruction contained in each branch node is a binary 0, and the right branch is selected when the instruction yields a binary 1.
[0021] The right side of tree 100, starting from A and proceeding to node C, etc., represents a tree search including some binary elements and other more complex elements. For example, node K includes four branches that are selectable based on a two-bit analysis, while node G has branches that are selectable based on a comparison performed on several longer binary patterns (5 bits in the example).
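The branch-selection behavior of tree 100 can be sketched as follows. The node "instruction" encoding used here is purely hypothetical; the patent only establishes that a node may branch on one bit, on two bits (as node K), or on a multi-bit pattern comparison (as node G):

```python
# Hypothetical branch selection at a tree node. `instr` is an invented dict
# standing in for the node's instruction; `bits` is the address bit pattern
# and `pos` the current analysis position. Returns (branch index, new pos).
def select_branch(instr, bits, pos):
    kind = instr["kind"]
    if kind == "bit1":       # classical binary node: 0 -> left branch, 1 -> right
        return int(bits[pos]), pos + 1
    if kind == "bit2":       # four-way node like K: two bits select branch 0..3
        return int(bits[pos:pos + 2], 2), pos + 2
    if kind == "pattern":    # node like G: compare a longer pattern (e.g. 5 bits)
        n = len(instr["pattern"])
        matched = bits[pos:pos + n] == instr["pattern"]
        return (0 if matched else 1), pos + n
    raise ValueError(f"unknown node kind: {kind}")
```

A two-bit node consuming `11` would select branch 3, mirroring the four-way branching described for node K.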
[0022] For the embodiments depicted by the figures herein, it is assumed that multiple 16-bit instruction words are associated with each node for routing or filtering processing. The selection of a 16-bit instruction is based on the current state of the art, in which 16-bit instruction words are common. It should be noted, however, that for complex decision methods or for cases in which more than two branches are involved, instructions can have different lengths without departing from the spirit or scope of the present invention.
[0023] With reference now to FIG. 2, there is illustrated a block diagram depicting an interleaved packet processing system in accordance with a preferred embodiment of the present invention. Specifically, an interleaved packet processing system 200 includes a bi-task processing engine 10 connected to an external memory 12 by a standard address bus, data bus, and control bus represented schematically by a bus 14. External memory 12 stores instructions that are executed at each node of the tree and is divided into two sub-memories 12-1 and 12-2. Sub-memory 12-1 contains normal size (16-bit) instructions that are executed only in response to particular predefined values of the most significant bit (MSB) of a binary address sequence, while area 12-2 contains dual (double) size instructions. In accordance with a preferred embodiment of the present invention, external memory 12 contains the instructions of two independent trees.
[0024] As further depicted in FIG. 2, processing engine 10 includes a main task processor 16, which can be either a finite state machine or a nano-processor, and a task scheduler 18 that generates “thread” clocking composed of successive alternate even and odd time cycles and enables processing activity and loading for each task. Task scheduler 18 is connected to task processor 16 to initiate a node process. Task scheduler 18 is also connected to register banks, bank A 20 and bank B 22, via activation lines 24 and 26.
[0025] Banks 20 and 22 respectively contain the instructions associated with each node in a given tree. Activation signals carried from task scheduler 18 on activation lines 24 and 26 are utilized for loading an instruction from external memory 12 into one of banks 20 or 22 via external bus 14. Activation signals on activation lines 24 and 26 also activate the transfer of an instruction from the other bank (the bank not being loaded from external memory 12) to task processor 16 via an internal bus 28 for processing the instruction. At any given time, one bank has valid access to internal bus 28 while the other bank has access to external bus 14. This bus connection is inverted at each edge of the thread clock, allowing the processing of an instruction from one tree while an instruction from the other tree is loaded into its corresponding bank. In a preferred embodiment of the present invention, one of banks 20 or 22 is associated with a source address tree while the other bank is associated with a destination address tree in an address lookup mechanism.
[0026] Multiple temporary registers 30 within processing engine 10 contain information for each task that is maintained between two consecutive processing periods of processor 16. Temporary registers 30 are particularly useful when the processing of an instruction is split into two time cycles of the clock within task scheduler 18. The address bit patterns that are processed by instructions loaded into processor 16 are provided by a system bus 32 into a pattern register A 34 and a pattern register B 36.
[0027] Turning now to FIG. 3, there is illustrated a timing diagram depicting the interleaving of tasks performed by task processor 16 at the direction of task scheduler 18 in accordance with a preferred embodiment of the present invention. As shown in FIG. 3, the thread clock maintained by task scheduler 18 has a rising edge when process A is activated and a falling edge when process B starts. The instruction structure encompassed by processing engine 10 is designed to operate in temporal increments corresponding exactly to the duration of each clock cycle, which is constant, so that there is no need to check whether process A is completed or not, and similarly for process B.
[0028] While task A is processed, a next instruction for task B is loaded into register bank B. Similarly, while task B is being processed, a next instruction for task A is loaded into bank A. The result is a continual interleaving process in which task A and task B alternate until each process reaches a leaf node.
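The load/process interleaving above can be modeled minimally in software, with queues standing in for the register banks and the thread clock reduced to a cycle counter. All names are placeholders, not the patent's signals; a hardware implementation overlaps loading with processing, which this sequential sketch does not attempt to show:

```python
# Toy model of even/odd cycle interleaving: task A runs on even cycles,
# task B on odd cycles, each consuming one queued "instruction" per turn.
from collections import deque

def run_interleaved(instrs_a, instrs_b):
    banks = {"A": deque(instrs_a), "B": deque(instrs_b)}
    trace, cycle = [], 0
    while banks["A"] or banks["B"]:
        task = "A" if cycle % 2 == 0 else "B"   # even cycles -> A, odd -> B
        if banks[task]:
            trace.append((cycle, task, banks[task].popleft()))
        cycle += 1
    return trace
```

Running two tasks of two instructions each yields a strictly alternating A/B trace, matching the thread clocking described for task scheduler 18.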
[0029] If there is a need to process a longer instruction (stored in the second memory area 12-2) that cannot fit within a single cycle, temp register 30 within processing engine 10 enables utilization of two cycles for processing such an instruction. FIG. 3 depicts a dual loading of a long instruction during task A. As shown in FIG. 3, processing for task A is delayed until the full loading is completed over two task A loading cycles that are interleaved with process cycles as per the scheduling provided by task scheduler 18. It is also possible to start part of the process, set the A or B state bit (illustrated in FIG. 4) when the cycle ends, and, before the end of the cycle, store intermediate state or results into the temp register 30 dedicated to the corresponding task.
[0030] The dual processing of an instruction for task A is also depicted in FIG. 3 for the case in which a single size instruction halts the loading during one thread until the instruction is fully processed. This mechanism requires the use of temp register 30, as task processor 16 will process task B in between processing intervals for the processing of task A.
[0031] With reference now to FIG. 4, there is illustrated a detailed block diagram depicting interleaved packet processing system 200 in accordance with a preferred embodiment of the present invention. At the beginning of a packet address processing interval, bank A 20 is loaded with a bit pattern that provides, as a next instruction, the root address for tree A. The stored address for the next instruction (the root address in the first step, but contained in the instruction for further steps) is provided by task processor 16 via NEXT ADD bus 40 and driver 42 to bank A.
[0032] The address contents are loaded into an ADD (address) register 44 via bank data bus 46 as “InsAD” (Instruction Address) in response to a BKRD bank read command activated by a NEXTCMD signal from task processor 16 on line 48. The NEXTCMD signal instructs ADD register 44 to load this next address for instruction. NEXTCMD includes the index (IX) (described in FIG. 5) used to increment the next address for instruction. ADD register 44 is a register/counter loaded by “InsAD”, which is the address of the first word of the instruction to be loaded and whose value is incremented by an integrated counter (not depicted) to load the other words of the instruction.
[0033] When a rising edge is detected on CKA line 50 from task scheduler 18, the first address corresponding to the initial state of the counter is presented on external add bus 14 with an Ext read command on line 51 in order to access the external memory. A driver 52 is activated by the Ext read command on line 51, and driver 42 is also opened to present the least significant bits (LSBs) of the address to load the memory field into the corresponding register of bank A 20 with a bank write command BKWR. As the first address indicates whether the instruction is a single instruction (located in area 12-1) or a dual instruction (located in area 12-2), it is possible to stop the counter at the end of the cycle on the last word that should be downloaded.
[0034] In the case of a dual size instruction, a bit DUAL is set within ADD register 44 which prevents task scheduler 18, through CKA line 50, from reloading the address from bank A to ADD register 44, thus allowing the counter to continue loading the remaining words of the instruction into bank A during the next cycle. The size of bank A is designed to accept a dual size instruction. For each word loaded from memory in response to a read signal and a chip select signal, ADD register 44 selects the appropriate register in bank A by using the LSB bits of the counter value on bank add bus 54 as the register address to load the instruction word into the appropriate bank register in response to a bank write (BKWR) to this location. Upon occurrence of the next CKA rising edge, the DUAL bit is tested. If the DUAL bit is set, the counter continues and loads the additional instruction words until the counter reaches its limit. If the DUAL bit is not set, ADD register 44 issues a BKRD read command to the register of bank A selected by NEXT ADD on bus 40 from task processor 16. This BKRD read command allows the InsAD Instruction Address to be delivered on bank data bus 46 and then stored in ADD register 44.
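The counter-driven loading of paragraphs [0032]-[0034] can be roughed out as follows. The words-per-cycle count, memory layout, and function names are assumptions for illustration; the essential behavior modeled is that a DUAL-style flag lets the word counter continue into a second loading cycle for double-size instructions:

```python
# Rough model of instruction loading by a register/counter: the counter walks
# consecutive word addresses in "external memory", and a dual flag grants a
# second loading cycle so a double-size instruction can be fully fetched.
WORDS_PER_CYCLE = 2   # assumed number of words transferable per thread-clock cycle

def load_instruction(memory, ins_ad, dual):
    """Return (loaded bank contents, number of loading cycles used)."""
    total = WORDS_PER_CYCLE * (2 if dual else 1)
    bank, addr, cycles = [], ins_ad, 0
    while len(bank) < total:
        for _ in range(WORDS_PER_CYCLE):   # one cycle's worth of word reads
            bank.append(memory[addr])
            addr += 1                      # counter increments per word
        cycles += 1                        # dual set -> loop runs a second cycle
    return bank, cycles

memory = {i: f"w{i}" for i in range(8)}
```

A single-size instruction completes in one loading cycle; with the dual flag set, loading spans two cycles, mirroring the DUAL bit preventing an early address reload.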
[0035] When the instruction is loaded in bank A 20, task scheduler 18 delivers a CKA signal to inform task processor 16 to fetch the first instruction word in bank A. The first register address is delivered by task processor 16 via add bus 56. A read is then performed on bank A in response to a SEL A command on line 58, and the first instruction word provided from register bank A on data bus 60 is utilized by task processor 16, which can then execute the instruction and compare the pattern associated with the loaded instruction with pattern A in register 34. The result of such a comparison may be to load other instruction words from bank A using the same mechanism. At the end of a processing cycle, corresponding to completion of a single clock cycle, either the process is not completed and temporary results are stored in Temp Register 30-1 for task A, or the processing of the current instruction is finished. Temp register 30-2 is reserved for temporary storage of task B.
[0036] In the latter case, when the instruction process is completed, the position (address in bank A) corresponding to the address of the next instruction in bank A is put on NEXT ADD bus 40 using a few bits (3 bits for eight register positions in bank A) in order to provide the address for the next loading to ADD register 44.
[0037] In the former case, in which the instruction process is not completed, two 1-bit A state and B state registers within task processor 16 are each utilized to define the state at the end of the processing cycle, used when more than one cycle is needed. The setting of the single bit in the A state and B state registers indicates whether the next instruction to be processed is a continuation of the previous processing activity or a new instruction.
[0038] Turning now to FIG. 5, there is illustrated a representative format of a lookup instruction utilized in the interleaved packet processing system of the present invention. As shown in FIG. 5, each instruction contains three main fields. The first of these fields includes the instruction itself that is executed by the task processor. The second field is a comparison field which contains the pattern previously stored within the external memory (in a table, for example) to compare with the A or B pattern that corresponds to a source or destination address bit pattern at the current point of the address processing analysis. Finally, the lookup instruction includes a next address field that contains the addresses for the possible next instructions.
[0039] The instruction itself is generally defined in a single word. The contents of the instruction field define the mode of analysis to perform, such as a one-bit, two-bit, or three-bit full comparison resulting in two, four, or eight branches, or whether a multi-bit pattern comparison should be performed using further comparison fields. The instruction field contents also include fields defining the number of elements (bits) within the comparison field, and the number of bits in the next address field, which defines the size of the instruction and informs the processor of whether or not an instruction is fully loaded.
[0040] In full comparison mode, additional fields are defined that identify the next address to use for each output case, which can be a direct value or an indexed value. There is one such sub-field for each possible branch. FIG. 5 illustrates a case in which four branches correspond to a two-bit full comparison: branches 00, 01, 10, and 11.
[0041] The index (IX) is a 2-bit field that indicates the actual address value based on the address given as a base address. A value of 00 for IX indicates that the address is the address in the indicated next add field, while 01 instructs to increment the indicated next add field by one. An IX value of 10 instructs to increment by 2, and 11 to increment by 3. A single next add field thus allows pointing to up to four different instruction elements in memory, reducing the size of the instruction itself.
[0042] The comparison field stores the pattern(s), if any, to compare to the bits starting at the current position in the A or B pattern field. For each pattern, a sub-field indicates the length of the pattern (Nbbits), a possible mask (Pattern Mask), and the next address sub-field (direct or indexed) to use, or the next comparison to perform, upon a match or a mismatch. The index method is the same as that defined in the instruction field. It should be noted that, when the link is performed on another comparison field, the index field (IX) is irrelevant.
[0043] The next address field contains the list of addresses of the nodes connected to the branches of the current node. Consecutive addresses may be used, but cannot always be used, as in the case of multiple branches wherein one address may be followed by a single instruction while others are followed by a dual instruction.
[0044] While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.
Claims:
Claims (16)
1. A system for performing interleaved packet processing in a network router, wherein a packet to be routed includes a source address bit pattern and a destination address bit pattern that are each processed by a task processor in accordance with a data tree, said data tree including a plurality of nodes linked by branches wherein an instruction that is associated with each node within said data tree is utilized for determining which branch is to be taken in accordance with said source address bit pattern or said destination address bit pattern, said system comprising:
a first bank of registers for loading an instruction to be executed by said task processor at each node of said data tree in accordance with said source address bit pattern;
a second bank of registers for loading an instruction to be executed by said task processor at each node of said data tree in accordance with said destination address bit pattern; and
a task scheduler for enabling said first bank of registers to transfer an instruction loaded therein for processing by said task processor only during even time cycles and for enabling said second bank of registers to transfer an instruction loaded therein for processing by said task processor only during odd time cycles.
2. The system of claim 1, wherein said task scheduler includes a clock signal generator that generates said even and odd time cycles in an alternating series of rising edges and falling edges.
3. The system of claim 1, further comprising an address register for storing an address of a next instruction to be loaded into either said first bank of registers or said second bank of registers from a memory device before being executed by said task processor.
4. The system of claim 3, wherein said address register further comprises a counter for incrementing said address of the next instruction in response to a dual instruction.
5. The system of claim 3, wherein said memory includes instructions to be executed by said task processor.
6. The system of claim 5, wherein said memory further comprises a first memory area containing normal size instructions and a second memory area containing dual size instructions.
7. The system of claim 1, further comprising at least one temporary register for storing information from said task processor between two consecutive processing time cycles when such processing lasts more than one time cycle.
8. The system of claim 7, further comprising a 1-bit state register for each of said first and second banks of registers, said 1-bit state register being set when said processing lasts more than one time cycle.
9. A method for performing interleaved packet processing in a network router, wherein a packet to be routed includes a source address bit pattern and a destination address bit pattern that are each processed by a task processor in accordance with a data tree, said data tree including a plurality of nodes linked by branches wherein an instruction that is associated with each node within said data tree is utilized for determining which branch is to be taken in accordance with said source address bit pattern or said destination address bit pattern, said method comprising:
loading into a first bank of registers an instruction to be executed by said task processor at each node of said data tree in accordance with said source address bit pattern;
loading into a second bank of registers an instruction to be executed by said task processor at each node of said data tree in accordance with said destination address bit pattern;
transferring an instruction from said first bank of registers to be processed by a task processor only during even time cycles; and
transferring an instruction from said second bank of registers to be processed by said task processor only during odd time cycles.
10. The method of claim 9, further comprising generating said even and odd time cycles in an alternating series of rising edges and falling edges.
11. The method of claim 9, further comprising storing an address of a next instruction to be loaded into either said first bank of registers or said second bank of registers from a memory device before being executed by said task processor.
12. The method of claim 11, further comprising:
loading said address of said next instruction from said task processor into said first or second bank of registers;
transferring said address from said bank of registers to said address register;
reading said address from said address register; and
fetching said next instruction from said memory in response to said reading step.
13. The method of claim 11, further comprising incrementing said address of the next instruction in response to a dual instruction.
14. The method of claim 13, further comprising:
loading a dual instruction into either said first bank of registers or said second bank of registers;
interrupting said loading step during one time cycle if said loading requires two time cycles; and
during said time cycle during which said loading is interrupted, loading an instruction into the other of said first or second bank of registers.
15. The method of claim 13, further comprising:
processing a dual instruction utilizing said task processor;
interrupting said processing step during one time cycle if such a processing requires two time cycles; and
during said time cycle during which said processing is interrupted, executing an instruction provided by the other of said first or second bank of registers.
16. The method of claim 9, further comprising storing information from said task processor within at least one temporary register between two consecutive processing time cycles when such processing lasts more than one time cycle.
Similar Technologies:
Publication No. | Publication Date | Patent Title
US8295286B2|2012-10-23|Apparatus and method using hashing for efficiently implementing an IP lookup solution in hardware
US6526474B1|2003-02-25|Content addressable memory (CAM) with accesses to multiple CAM arrays used to generate result for various matching sizes
US6553002B1|2003-04-22|Apparatus and method for routing data packets through a communications network
US6600744B1|2003-07-29|Method and apparatus for packet classification in a data communication system
US7433871B2|2008-10-07|Efficient ipv4/ipv6 best matching prefix method and apparatus
US6665297B1|2003-12-16|Network routing table
US6289414B1|2001-09-11|Partially ordered cams used in ternary hierarchical address searching/sorting
US8150891B2|2012-04-03|System for IP address lookup using substring and prefix matching
US5438535A|1995-08-01|Content addressable memory system
US20050149513A1|2005-07-07|Compressed prefix tree structure and method for traversing a compressed prefix tree
JP2004194319A|2004-07-08|Apparatus and method for using completely configurable memory multi-stage pipeline logic and embedded type processor to realize multi-bit trie algorithm network search engine
Bando et al.2010|Flashtrie: Hash-based prefix-compressed trie for IP route lookup beyond 100Gbps
US20070194957A1|2007-08-23|Search apparatus and search management method for fixed-length data
US20050243827A1|2005-11-03|Lookup engine
US6772279B1|2004-08-03|Method and apparatus for monitoring the status of CAM comparand registers using a free list and a busy list
JP2004194321A|2004-07-08|Mechanism for reducing lookup latency in pipelined hardware implementation of trie-based ip lookup algorithm
US6804230B1|2004-10-12|Communication device with forwarding database having a trie search facility
WO2003044671A1|2003-05-30|Ram-based range content addressable memory
TW200301429A|2003-07-01|A method of improving the lookup performance of tree-type knowledge base searches
US7117196B2|2006-10-03|Method and system for optimizing leaf comparisons from a tree search
US7111289B2|2006-09-19|Method for implementing dual link list structure to enable fast link-list pointer updates
US7272685B2|2007-09-18|Integrated circuit with associated memory function
US6961337B2|2005-11-01|Interleaved processing system for processing frames within a network router
US6421660B1|2002-07-16|Enhanced searching method and apparatus for variable bit chains
SK3692000A3|2000-09-12|Method and system for fast routing lookups
Patent family:
Publication number | Publication date
US6961337B2|2005-11-01|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Patent title
US5917821A|1993-12-24|1999-06-29|Newbridge Networks Corporation|Look-up engine for packet-based network|
US6026473A|1996-12-23|2000-02-15|Intel Corporation|Method and apparatus for storing data in a sequentially written memory using an interleaving mechanism|
US20030174717A1|2002-03-15|2003-09-18|Boris Zabarski|System and method for longest prefix match for internet protocol lookup|
US20040008711A1|2002-07-09|2004-01-15|Lahti Gregg D.|System and method for anti-replay processing of a data packet|
US20070091797A1|2005-10-26|2007-04-26|Cisco Technology, Inc.|Method and apparatus for fast 2-key scheduler implementation|
US20080253650A1|2005-06-20|2008-10-16|Nikon Corporation|Image Processing Device, Image Processing Method, Image Processing Program Product, and Image-Capturing Device|
US20110116507A1|2009-11-16|2011-05-19|Alon Pais|Iterative parsing and classification|
US20140137128A1|2012-11-12|2014-05-15|Skymedi Corporation|Method of Scheduling Tasks for Memories and Memory System Thereof|
US20150036545A1|2011-02-22|2015-02-05|Snu R&Db Foundation|Self-Construction System of Wireless Sensor Network and Method for Self-Construction of Wireless Sensor Network Using the Same|
Legal events:
2001-01-03| AS| Assignment|Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:LE PENNEC, JEAN FRANCOIS;PIN, CLAUDE;BENAYOUN, ALAIN;AND OTHERS;REEL/FRAME:011419/0319;SIGNING DATES FROM 20001024 TO 20001027 |
2009-05-11| REMI| Maintenance fee reminder mailed|
2009-11-01| LAPS| Lapse for failure to pay maintenance fees|
2009-11-30| STCH| Information on status: patent discontinuation|Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
2009-12-22| FP| Lapsed due to failure to pay maintenance fee|Effective date: 20091101 |
Priority:
Application number | Filing date | Patent title
EP00480005.8||2000-01-06||
EP00480005||2000-01-06||